SEIR modeling - path specific vs. population level
An overview of SEIR models and probability distributions of periods of incubation and recovery
- Motivation for write-up
- Background on disease dynamics within an individual vs. population level disease dynamics
- Numerical solution to this deterministic population level model
The COVID-19 pandemic has brought a lot of attention to the study of epidemiology, and more specifically to the various mathematical models used to inform public health policies. Everyone has been trying to understand the growth or slowing of new cases and to predict the necessary healthcare resources. This blog post attempts to explain the foundations of some of the most widely used models and enlighten the reader on two key points.
First I want to introduce the concept of disease dynamics and how this notion differs from the point of view of an individual to that of a population i.e. how a disease progresses within an individual versus how a disease progresses on a population level (and how models make assumptions on each).
A discussion follows on deterministic models (not subject to random chance) versus stochastic models (inclusion of random chance).
When modelling infectious diseases, and pandemics in particular, a key task is to predict the number of infected people at any given time. From this simple question comes the idea of compartmentalization of the population, i.e. the division of the population into the two most basic categories:
- those that are infected
- those that are not
This is ultimately the foundation for all compartmental models in epidemiology.
The nuances between the models then come from how the above two groups are further compartmentalized. That is to say, how we decide the composition of the infected and the not-infected groups.
The infected group for example could be further sub-categorized into:
- asymptomatic
- symptomatic
Or
- no treatment necessary
- require treatment:
- local Doctor visit
- hospitalization
- admitted to intensive care unit
As you can see there are many ways to do this, but the more categories you have, the more difficult the model might become. Usually these subcategories are chosen to match the available data.
What about the non-infected group? Again there are many ways to subdivide this group but the most obvious first subdivision would be to separate those susceptible to the disease from those that are immune.
Let's have a look at a basic compartmental model, the SIR model.
- S --> Susceptible state:
An S individual is simply someone susceptible to the disease, that is anyone in the population who is healthy and not immune to the disease.
- I --> Infectious state:
Once exposed to the disease, an individual develops it and becomes infectious.
- R --> Recovered state:
An individual will either fight off the infection (with or without the help of treatment) or die. All of these outcomes are included in the R state. In the basic SIR model, anyone in R has acquired full and infinite immunity and cannot catch the disease again (of course many variations can be included to more closely reflect a given disease).
In this write-up we will focus on the SEIR models, which are similar to the SIR compartments above with the additional E state between S and I.
- E --> Exposed state:
The exposed state is the state when an individual has been exposed to the disease, but has not become infectious yet.
It should be noted that the incubation period (the time between exposure and the development of symptoms) is not necessarily the same as the time spent between states E and I. Indeed, an individual may become infectious before the appearance of symptoms, at the same time, or after.
By individual dynamics, I mean the progression of the disease within an individual, i.e. the movement of an individual from one state to another. In the models used here, an individual starts at S (although an initial exposed or infectious person is injected into the population). If exposed to the disease, he moves into state E; from there he moves to state I with probability 1, but after a time unique to him; he then moves into state R, again with probability 1 and after a time unique to him. From state R he stays in state R (either dead or having acquired full and infinite immunity). Let's have a closer look:
S → E
The chances of an individual going from S → E depends on three things:
- the number of contacts the person has per unit time
- the chance a given contact is with an I (the higher the number of I, the higher the chance)
- the chance of an S contracting the disease from a contact with an I
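To make these three factors concrete, here is a small numerical sketch (my own illustration; the values of $r$, $\rho$ and $I/N$ below are assumed examples). The exact chance of at least one infecting contact in a day is $1 - (1 - \rho \frac{I}{N})^r$, and the product $r \cdot \rho \cdot \frac{I}{N}$ used by the models below is a good approximation when these numbers are small.

```python
# Hypothetical illustration (values below are assumed, not from the model runs)
r = 7          # contacts per day
rho = 0.25     # chance a contact with an I transmits the disease
frac_I = 0.01  # fraction of the population currently infectious (I/N)

# Exact: probability of at least one infecting contact in a day
exact = 1 - (1 - rho * frac_I) ** r
# Linearized rate, as used by the compartmental models below
approx = r * rho * frac_I

print(exact, approx)
```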
E → I
All people exposed will eventually develop the disease. However, a given person might go from E to I on the first day, or in 10 days; this time is specific to the individual. Every additional day following exposure, the probability that this individual goes from E → I increases (we will have a look at the probability distribution and its importance later).
I → R
Similarly, all infectious people will recover (or die). Again, a given person might go from I to R in 5 days or in 15 days; this recovery time is specific to the individual.
Most basic models tend to disregard the individual dynamics above in favor of population level dynamics. That is to say, these models describe the disease at the population level, without looking at the specific progression of the disease within individuals, and use averages instead (although the S → E transition uses the same logic as above).
Below is an explanation for the SEIR model with its mathematical formulation.
Note no births or deaths are included.
S → E
As stated above, going from S to E depends on:
- the number of infectious people in the population: $\frac{I(t)}{N}$
- the average number of contacts an individual has per day: $r$
- the chance for an S to contract the disease after contact with an I: $\rho$
We can combine the last two into $\beta = r * \rho$
So the rate of change of the number of S in a population over a unit of time is $-\beta \frac{I(t)}{N} S(t)$.
Hence we can formulate this mathematically as follows:
$\Delta S = -\beta \frac{I}{N} S \, \Delta t$
or
$\frac{dS}{dt}=-\beta\frac{I(t)}{N}S(t)$
E → I
We have seen above how each individual goes from E to I.
On a population level, the number of E changes in two ways:
- new additions following S → E
- reduction following E → I
We know the inflow from S → E is: $\beta\frac{I(t)}{N}S(t)$
How can we model the number of E → I? While individually this is complicated and depends on the specific probability distribution, on a population level we can use the average time it takes - this is what most models do.
Let's say the average time from E → I is $\frac{1}{\sigma}$; then every unit of time, we have $\sigma E$ transitions from E → I.
Mathematically, we write this as:
$\Delta E = (\beta \frac{I}{N} S-\sigma E) \Delta t$
or
$\frac{dE}{dt}=\beta\frac{I(t)}{N}S(t) - \sigma E(t)$
I → R
Similarly, we have seen above how each individual goes from I to R, but this does not tell us about the population level dynamics.
On a population level, the number of I changes in two ways:
- new additions following E → I
- reduction following I → R
We know the number from E → I is: $\sigma E(t)$
How can we model the number of I → R? While individually this is again complicated, on a population level we can average it out - this is what most models do. Let's say the average time from I → R is $\frac{1}{\gamma}$; then we have:
$\Delta I = (\sigma E - \gamma I) \Delta t$
or
$\frac{dI}{dt}=\sigma E(t) - \gamma I(t)$
R → R
Finally, we can model the number of individuals in R state with the following equation:
$\Delta R = \gamma I \Delta t$
or
$\frac{dR}{dt}=\gamma I(t)$
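The four equations above can also be handed to an off-the-shelf ODE solver, as a cross-check on the hand-rolled Euler loop below. This is a minimal sketch using `scipy.integrate.odeint` with the same parameter values as the first plot; the state is expressed as population fractions.

```python
import numpy as np
from scipy.integrate import odeint

def seir_rhs(y, t, beta, sigma, gamma):
    # Right-hand sides of the four SEIR equations, with S, E, I, R
    # expressed as fractions of the population (so N = 1)
    S, E, I, R = y
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

N = 10000
y0 = [1 - 1/N, 1/N, 0.0, 0.0]   # one exposed person on day 0
t = np.linspace(0, 100, 1001)
sol = odeint(seir_rhs, y0, t, args=(1.75, 0.2, 0.1))

# With no births or deaths, the four fractions always sum to 1
print(sol[-1])
```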
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from scipy.stats import expon, gamma, poisson, weibull_min
# Let's build a numerical solution
def seir_model(init, parms, days):
    # init holds the initial *proportions* of the population in each state
    S_0, E_0, I_0, R_0 = init
    Epd, Ipd, Rpd = [0], [0], [0]
    S, E, I, R = [S_0], [E_0], [I_0], [R_0]
    dt = 0.1
    t = np.linspace(0, days, int(days/dt))
    sigma, beta, gam = parms
    for _ in t[1:]:
        next_S = S[-1] - beta*S[-1]*I[-1]*dt
        Epd.append(beta*S[-1]*I[-1]*dt)   # new E this step
        next_E = E[-1] + (beta*S[-1]*I[-1] - sigma*E[-1])*dt
        Ipd.append(sigma*E[-1]*dt)        # new I this step
        next_I = I[-1] + (sigma*E[-1] - gam*I[-1])*dt
        Rpd.append(gam*I[-1]*dt)          # new R this step
        next_R = R[-1] + (gam*I[-1])*dt
        S.append(next_S)
        E.append(next_E)
        I.append(next_I)
        R.append(next_R)
    return np.stack([S, E, I, R, Epd, Ipd, Rpd]).T
Parameters used for plot below:
- Days = 100
- Population = 10000
- Number of susceptible people on day 0 = 9999
- Number of exposed people on day 0 = 1
- No infected or recovered people on day 0
- $\sigma = 0.2$ (average of 5 days to go from exposed to infectious)
- $\beta = 1.75$ (average of $r=7$ contacts per day and $\rho = 25\%$ chance of a contact with an infectious person resulting in infection)
- $\gamma = 0.1$ (average of 10 days to go from infectious to recovered)
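For reference, the simulation behind this plot can be reproduced with a compact Euler loop equivalent to `seir_model` above (parameter values taken from the list; the state is kept as population fractions, and the plotting itself is omitted):

```python
# Same scheme as seir_model above, with the parameter values just listed
N, days, dt = 10000, 100, 0.1
sigma, beta, gam = 0.2, 1.75, 0.1
S, E, I, R = 1 - 1/N, 1/N, 0.0, 0.0   # 9999 susceptible, 1 exposed on day 0

history = [(S, E, I, R)]
for _ in range(int(days / dt)):
    # All right-hand sides use the values from the previous step
    S, E, I, R = (S - beta*S*I*dt,
                  E + (beta*S*I - sigma*E)*dt,
                  I + (sigma*E - gam*I)*dt,
                  R + gam*I*dt)
    history.append((S, E, I, R))

peak_I = max(h[2] for h in history)
print(peak_I)
```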
We notice a few things from the plot above on the impact of $\beta$:
- The higher $\beta$ is:
- the faster the epidemic seems to propagate in the population
- the higher the peak of infected individuals seems to be (meaning a higher chance hospital resources will be saturated)
- $\beta$ also appears to affect the overall number of people infected over the course of the epidemic - keeping all other parameters the same, once $\beta < 0.1$ no epidemic occurs at all (more on this later)
What effect does the average time from E → I have on the model?
## Let's try to see how the model changes
days = 1000
N = 10000
init = 1 - 1/N, 1/N, 0, 0
sigma_fast = 1 # average 1 day from E --> I (resembles an SIR model)
sigma_slow = 1/10 # 10 days on average, twice as long as COVID-19
sigma_avg = 1/5
beta = 1.75
gam = 0.1 # 1/10 --> 10 days average I --> R
parms_fastEI = sigma_fast, beta, gam
parms_slowEI = sigma_slow, beta, gam
parms_avg = sigma_avg, beta, gam
# Run simulation
results_fastEtoI = seir_model(init, parms_fastEI, days)
results_slowEtoI = seir_model(init, parms_slowEI, days)
results_avg = seir_model(init, parms_avg, days)
fig = go.Figure(data=[
go.Scatter(name='S_avg', x=np.linspace(0,days,days*10), y=results_avg.T[0], line={'dash':'solid', 'color':'blue'}),
go.Scatter(name='E_avg', x=np.linspace(0,days,days*10), y=results_avg.T[1], line={'dash':'solid', 'color':'yellow'}),
go.Scatter(name='I_avg', x=np.linspace(0,days,days*10), y=results_avg.T[2], line={'dash':'solid', 'color':'red'}),
go.Scatter(name='R_avg', x=np.linspace(0,days,days*10), y=results_avg.T[3], line={'dash':'solid', 'color':'green'}),
go.Scatter(name='S_fastEtoI', x=np.linspace(0,days,days*10), y=results_fastEtoI.T[0], line={'dash':'dash','color':'blue'}),
go.Scatter(name='E_fastEtoI', x=np.linspace(0,days,days*10), y=results_fastEtoI.T[1], line={'dash':'dash', 'color':'yellow'}),
go.Scatter(name='I_fastEtoI', x=np.linspace(0,days,days*10), y=results_fastEtoI.T[2], line={'dash':'dash', 'color':'red'}),
go.Scatter(name='R_fastEtoI', x=np.linspace(0,days,days*10), y=results_fastEtoI.T[3], line={'dash':'dash', 'color':'green'}),
go.Scatter(name='S_slowEtoI', x=np.linspace(0,days,days*10), y=results_slowEtoI.T[0], line={'dash':'dot', 'color':'blue'}),
go.Scatter(name='E_slowEtoI', x=np.linspace(0,days,days*10), y=results_slowEtoI.T[1], line={'dash':'dot', 'color':'yellow'}),
go.Scatter(name='I_slowEtoI', x=np.linspace(0,days,days*10), y=results_slowEtoI.T[2], line={'dash':'dot', 'color':'red'}),
go.Scatter(name='R_slowEtoI', x=np.linspace(0,days,days*10), y=results_slowEtoI.T[3], line={'dash':'dot', 'color':'green'}),
])
fig.show()
We notice a few things from the plot above on the impact of the average time from E → I:
- The shorter the time, on average, from E → I:
- the faster the epidemic propagates in the population
- the higher the peak of infected individuals will be (meaning a higher chance hospital resources will be saturated)
- However, the time from E → I has no impact on the total number of individuals infected over the entire course of the epidemic (all are eventually infected in any case with these parameters)
What effect does time from I → R have on the model?
## Let's try to see how the model changes
days = 1000
N = 10000
init = 1 - 1/N, 1/N, 0, 0
sigma_avg = 1/5
beta = 1.75
gam_avg = 1/10 # 10 days average I --> R
gam_slow = 1/30 # 30 days average I --> R
gam_fast = 1.8 # recovery faster than beta = 1.75, so no epidemic occurs (see below)
parms_fastIR = sigma_avg, beta, gam_fast
parms_slowIR = sigma_avg, beta, gam_slow
parms_avg = sigma_avg, beta, gam_avg
# Run simulation
results_fastItoR = seir_model(init, parms_fastIR, days)
results_slowItoR = seir_model(init, parms_slowIR, days)
results_avg = seir_model(init, parms_avg, days)
fig = go.Figure(data=[
go.Scatter(name='S_avg', x=np.linspace(0,days,days*10), y=results_avg.T[0], line={'dash':'solid', 'color':'blue'}),
go.Scatter(name='E_avg', x=np.linspace(0,days,days*10), y=results_avg.T[1], line={'dash':'solid', 'color':'yellow'}),
go.Scatter(name='I_avg', x=np.linspace(0,days,days*10), y=results_avg.T[2], line={'dash':'solid', 'color':'red'}),
go.Scatter(name='R_avg', x=np.linspace(0,days,days*10), y=results_avg.T[3], line={'dash':'solid', 'color':'green'}),
go.Scatter(name='S_fastItoR', x=np.linspace(0,days,days*10), y=results_fastItoR.T[0], line={'dash':'dash','color':'blue'}),
go.Scatter(name='E_fastItoR', x=np.linspace(0,days,days*10), y=results_fastItoR.T[1], line={'dash':'dash', 'color':'yellow'}),
go.Scatter(name='I_fastItoR', x=np.linspace(0,days,days*10), y=results_fastItoR.T[2], line={'dash':'dash', 'color':'red'}),
go.Scatter(name='R_fastItoR', x=np.linspace(0,days,days*10), y=results_fastItoR.T[3], line={'dash':'dash', 'color':'green'}),
go.Scatter(name='S_slowItoR', x=np.linspace(0,days,days*10), y=results_slowItoR.T[0], line={'dash':'dot', 'color':'blue'}),
go.Scatter(name='E_slowItoR', x=np.linspace(0,days,days*10), y=results_slowItoR.T[1], line={'dash':'dot', 'color':'yellow'}),
go.Scatter(name='I_slowItoR', x=np.linspace(0,days,days*10), y=results_slowItoR.T[2], line={'dash':'dot', 'color':'red'}),
go.Scatter(name='R_slowItoR', x=np.linspace(0,days,days*10), y=results_slowItoR.T[3], line={'dash':'dot', 'color':'green'}),
])
fig.show()
We notice a few things from the plot above on the impact of the average time from I → R:
- The longer the time, on average, from I → R:
- the faster the epidemic propagates in the population
- the higher the peak of infected individuals will be (meaning a higher chance hospital resources will be saturated)
- As opposed to the time from E → I, the time from I → R has an impact on the total number of individuals infected over the entire time of the epidemic (with no epidemic if $\gamma > 1.75$ and all other parameters are kept constant)
So we can see the time periods for E → I and I → R, along with the value of $\beta$, are critical components in how the model behaves.
Notably, no epidemic occurs if $\beta < \gamma$ (or if $\frac{\beta}{\gamma} < 1$)
In fact, $\frac{\beta}{\gamma}$ has major implications for the model and is called the basic reproduction number $R_0$.
Hence, if $R_0 < 1$ no epidemic occurs, while the opposite implies an epidemic.
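This threshold can be checked numerically. A minimal sketch (same Euler scheme as above; the $\beta$ values are assumed for illustration) compares the final epidemic size for $R_0$ below and above 1:

```python
# Final epidemic size under the same Euler scheme; sigma and gamma are the
# averages used above, the two beta values are assumed examples
def final_epidemic_size(beta, sigma=0.2, gamma=0.1, days=2000, dt=0.1):
    S, E, I, R = 1 - 1e-4, 1e-4, 0.0, 0.0
    for _ in range(int(days / dt)):
        S, E, I, R = (S - beta*S*I*dt,
                      E + (beta*S*I - sigma*E)*dt,
                      I + (sigma*E - gamma*I)*dt,
                      R + gamma*I*dt)
    return R  # fraction ever infected once the epidemic has run its course

size_subcritical = final_epidemic_size(beta=0.05)  # R0 = 0.5: dies out
size_epidemic = final_epidemic_size(beta=0.2)      # R0 = 2: epidemic
print(size_subcritical, size_epidemic)
```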
While arguments can be made that the compartments themselves don't reflect the reality of COVID-19, this is not the point of this discussion; I want to focus on the idea that the population level dynamics forget about the individual progression of the disease.
Let's have a look at the individual progression of disease to understand what is wrong.
E → I
The time period from exposed to infectious is not instant. Once a person is exposed, it takes some days for them to become infectious.
Research to date gives an average of about 5 days, but no individual becomes infectious a mere day after exposure, for example.
Research also suggests 95% of exposed people will become infectious within 14 days.
From this data, we can start to think about the probability distribution for E → I.
A simple trick is to use the same numerical model, but set different parameters.
In order to see the distribution of E → I, we set the initial number of E equal to the entire population and plot the number of E over time, as below (the analytic solution is the exponential distribution; numerically we only approximate it):
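This trick can be sanity-checked in isolation: with everyone in E and nobody in S, the only remaining flow is E → I at rate $\sigma$, so the simulated E curve should track the analytic exponential decay. A minimal sketch using the $\sigma = 0.2$ from above:

```python
import numpy as np

# Everyone starts in E, nobody in S: the only flow left is E -> I at rate
# sigma, so E(t) should follow exp(-sigma * t). sigma = 0.2 as above.
sigma, dt = 0.2, 0.1
E = 1.0
E_curve = []
for step in range(int(30 / dt)):
    E_curve.append(E)
    E -= sigma * E * dt   # Euler step, same as in seir_model with S = I = 0

t = np.arange(len(E_curve)) * dt
analytic = np.exp(-sigma * t)
max_err = np.abs(np.array(E_curve) - analytic).max()
print(max_err)
```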
The graph above gives some interesting insights into the model used so far:
- about 20% of the exposed become infectious after 1 day
- the median is 3.3 days
- the mean is about 5 days
- 95% become infectious within 15 days
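These figures can be checked against the exponential distribution directly. A sketch using `scipy.stats` with the mean of 5 days (the exact exponential median is $5 \ln 2 \approx 3.5$ days, close to the value read off the plot):

```python
from scipy.stats import expon

# An exponential E -> I time with mean 5 days (scale = mean)
dist = expon(scale=5)
after_1_day = dist.cdf(1)    # fraction infectious one day after exposure
median_days = dist.median()  # half the exposed have turned infectious
within_15 = dist.cdf(15)     # fraction infectious within 15 days
print(after_1_day, median_days, within_15)
```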
While this may be close to reality for COVID-19, a likelier distribution for the time spent in state E before moving to state I is a gamma or a Weibull distribution.
days = np.arange(30)
cdf = pd.DataFrame({
'Exponential': expon.cdf(days,loc=0,scale=5),
'Poisson': poisson.cdf(days,4.2, loc=0.8),
'Gamma': gamma.cdf(days,1.8,loc=0.8,scale=3),
'Weibull': weibull_min.cdf(days,1.1, loc=0.8, scale=5)
})
fig = go.Figure(data=[
go.Scatter(name='Expon', x=days, y=cdf.Exponential),
go.Scatter(name='Gamma', x=days, y=cdf.Gamma),
go.Scatter(name='Weibull', x=days, y=cdf.Weibull)
])
fig.update_layout(
title='Number of infectious over time when all population is exposed on day 0',
xaxis_title='Days',
yaxis_title='Percent of exposed having become infectious'
)
fig.show()
The time period from infectious to recovered is not instant either.
Once a person is infectious, it takes some days for them to fight off the infection or die.
Research shows an average of 11 days, but no individual recovers a mere day after becoming infectious, for example.
Research also suggests 95% of infectious people recover or die within 18 days. From this data, we can start to think about the probability distribution for I → R.
Let's have a look at multiple distributions.
days = np.arange(30)
df = pd.DataFrame({
'Exponential': expon.rvs(loc=0,scale=11, size=10000),
'Gamma': gamma.rvs(7, loc=4,scale=1.1, size=10000),
'Weibull': weibull_min.rvs(2.5, loc=6, scale=6.7, size=10000)
})
print(df.mean())
print(df.quantile(q=0.95))
print(df.median())
days = np.arange(30)
cdf = pd.DataFrame({
'Exponential': expon.cdf(days,loc=0,scale=11),
'Gamma': gamma.cdf(days,7,loc=4,scale=1.1),
'Weibull': weibull_min.cdf(days,2.5, loc=6, scale=6.7)
})
fig = go.Figure(data=[
go.Scatter(name='Expon I --> R', x=days, y=cdf.Exponential),
go.Scatter(name='Gamma I --> R', x=days, y=cdf.Gamma),
go.Scatter(name='Weibull I --> R', x=days, y=cdf.Weibull)
])
fig.show()
We can see here that the actual distribution matters a great deal, and a simple average will not be enough for a proper model.
Let's see if we can create a stochastic model with this information.
# Helper for the model below: population DataFrame with num_E individuals seeded in state E
def make_dfE(p, num_E):
    df = pd.DataFrame(np.full((p,1), 'S').T[0], columns=['State'])
    df['Day'] = 0
    df.loc[rng.choice(p, size=num_E, replace=False), 'State'] = 'E'
    return df

# Helper for the model below: same, but seeding the initial individuals in state I
def make_dfI(p, num_E):
    df = pd.DataFrame(np.full((p,1), 'S').T[0], columns=['State'])
    df['Day'] = 0
    df.loc[rng.choice(p, size=num_E, replace=False), 'State'] = 'I'
    return df
def SEIR_modelE(beta, p, num_E, days, gamexp1, gamexp2):
    # Agent-based stochastic SEIR: one row per individual, tracking the state
    # and the day that state was entered. Initial cases are seeded in I.
    df = make_dfI(p, num_E)
    S = np.array([], dtype=int)
    E = np.array([], dtype=int)
    I = np.array([], dtype=int)
    R = np.array([], dtype=int)
    Epd = np.array([], dtype=int)
    Ipd = np.array([], dtype=int)
    Rpd = np.array([], dtype=int)
    b = beta  # beta is an array, one value per day
    rand = np.random.random(size=(p, days))
    # Draw an individual E -> I time for every person
    if gamexp1 == 'expon':
        EtoI = expon.rvs(loc=0, scale=5, size=p)
    else:
        EtoI = gamma.rvs(1.8, loc=0.8, scale=3, size=p)
    # Draw an individual I -> R time for every person
    if gamexp2 == 'expon':
        ItoR = expon.rvs(loc=0, scale=11, size=p)
    else:
        ItoR = gamma.rvs(7, loc=4, scale=1.1, size=p)
    for j in range(0, days-1):
        # S -> E with probability beta * I/p, per susceptible per day
        StoE_index = df.loc[(df.State == 'S') & (rand[:,j] < b[j]*len(np.where(df.State=='I')[0])/p)].index
        # E -> I and I -> R once an individual's drawn time has elapsed
        EtoI_index = df.loc[(df.State == 'E') & (j - df.Day >= EtoI)].index
        ItoR_index = df.loc[(df.State == 'I') & (j - df.Day >= ItoR)].index
        Epd = np.append(Epd, len(StoE_index))
        Ipd = np.append(Ipd, len(EtoI_index))
        Rpd = np.append(Rpd, len(ItoR_index))
        # Apply I -> R, then E -> I, then S -> E so nobody moves twice in one day
        df.iloc[ItoR_index] = ['R', j]
        df.iloc[EtoI_index] = ['I', j]
        df.iloc[StoE_index] = ['E', j]
        S = np.append(S, len(np.where(df.State=='S')[0]))
        E = np.append(E, len(np.where(df.State=='E')[0]))
        I = np.append(I, len(np.where(df.State=='I')[0]))
        R = np.append(R, len(np.where(df.State=='R')[0]))
    Epd[0] += num_E
    return S, E, I, R, Epd, Ipd, Rpd
from numpy.random import default_rng
rng = default_rng()
# Define parameters
days1 = 100
p1 = 10000
num_E1 = 1
beta1 = 1.75
#beta = 3*np.ones(days) + (np.arange(days) * np.random.normal(0,1,days)/100)
#beta = np.tile(beta, (10000, 1)).T
#beta = beta+0.1-beta*expon.cdf(np.arange(days),loc=28,scale=1)
beta1 = beta1*np.ones(days1)
# Run simulation
results_stoch0 = SEIR_modelE(beta1, p1, num_E1, days1, 1, 1)
results_stoch1 = SEIR_modelE(beta1, p1, num_E1, days1, 'expon', 'expon')
results_stoch2 = SEIR_modelE(beta1, p1, num_E1, days1, 'expon', 1)
results_stoch3 = SEIR_modelE(beta1, p1, num_E1, days1, 1, 'expon')
%%timeit
results_stoch0 = SEIR_modelE(beta1, p1, num_E1, days1, 1, 1)
# Comparing with previous average deterministic model
# Define parameters
days = 100
N = 10000
init = 1 - 1/N, 1/N, 0, 0
sigma = 0.2 # 1/5 --> 5 days on average to go from E --> I
beta = 1.75
gam = 1/11 # 1/11 --> 11 days on average to go from I --> R
parms = sigma, beta, gam
# Run simulation
results_avg = seir_model(init, parms, days)
%%timeit
results_avg = seir_model(init, parms, days)
fig = go.Figure(data=[
go.Scatter(name='S_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[0]/p1, line={'dash':'dash', 'color':'blue'}),
go.Scatter(name='E_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[1]/p1, line={'dash':'dash', 'color':'yellow'}),
go.Scatter(name='I_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[2]/p1, line={'dash':'dash', 'color':'red'}),
go.Scatter(name='R_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[3]/p1, line={'dash':'dash', 'color':'green'}),
go.Scatter(name='S_avg', x=np.linspace(0,days,days*10), y=results_avg.T[0], line={'dash':'solid', 'color':'blue'}),
go.Scatter(name='E_avg', x=np.linspace(0,days,days*10), y=results_avg.T[1], line={'dash':'solid', 'color':'yellow'}),
go.Scatter(name='I_avg', x=np.linspace(0,days,days*10), y=results_avg.T[2], line={'dash':'solid', 'color':'red'}),
go.Scatter(name='R_avg', x=np.linspace(0,days,days*10), y=results_avg.T[3], line={'dash':'solid', 'color':'green'}),
])
fig.show()
fig = go.Figure(data=[
go.Scatter(name='S_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[0]/p1, line={'dash':'dash', 'color':'blue'}),
go.Scatter(name='E_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[1]/p1, line={'dash':'dash', 'color':'yellow'}),
go.Scatter(name='I_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[2]/p1, line={'dash':'dash', 'color':'red'}),
go.Scatter(name='R_stoch0', x=np.arange(len(results_stoch0[0])), y=results_stoch0[3]/p1, line={'dash':'dash', 'color':'green'}),
go.Scatter(name='S_stoch1', x=np.arange(len(results_stoch1[0])), y=results_stoch1[0]/p1, line={'dash':'dot', 'color':'blue'}),
go.Scatter(name='E_stoch1', x=np.arange(len(results_stoch1[0])), y=results_stoch1[1]/p1, line={'dash':'dot', 'color':'yellow'}),
go.Scatter(name='I_stoch1', x=np.arange(len(results_stoch1[0])), y=results_stoch1[2]/p1, line={'dash':'dot', 'color':'red'}),
go.Scatter(name='R_stoch1', x=np.arange(len(results_stoch1[0])), y=results_stoch1[3]/p1, line={'dash':'dot', 'color':'green'}),
go.Scatter(name='S_stoch2', x=np.arange(len(results_stoch1[0])), y=results_stoch2[0]/p1, line={'dash':'dash', 'color':'blue'}),
go.Scatter(name='E_stoch2', x=np.arange(len(results_stoch1[0])), y=results_stoch2[1]/p1, line={'dash':'dash', 'color':'yellow'}),
go.Scatter(name='I_stoch2', x=np.arange(len(results_stoch1[0])), y=results_stoch2[2]/p1, line={'dash':'dash', 'color':'red'}),
go.Scatter(name='R_stoch2', x=np.arange(len(results_stoch1[0])), y=results_stoch2[3]/p1, line={'dash':'dash', 'color':'green'}),
go.Scatter(name='S_stoch3', x=np.arange(len(results_stoch1[0])), y=results_stoch3[0]/p1, line={'dash':'dot', 'color':'blue'}),
go.Scatter(name='E_stoch3', x=np.arange(len(results_stoch1[0])), y=results_stoch3[1]/p1, line={'dash':'dot', 'color':'yellow'}),
go.Scatter(name='I_stoch3', x=np.arange(len(results_stoch1[0])), y=results_stoch3[2]/p1, line={'dash':'dot', 'color':'red'}),
go.Scatter(name='R_stoch3', x=np.arange(len(results_stoch1[0])), y=results_stoch3[3]/p1, line={'dash':'dot', 'color':'green'}),
go.Scatter(name='S_avg', x=np.linspace(0,days,days*10), y=results_avg.T[0], line={'dash':'solid', 'color':'blue'}),
go.Scatter(name='E_avg', x=np.linspace(0,days,days*10), y=results_avg.T[1], line={'dash':'solid', 'color':'yellow'}),
go.Scatter(name='I_avg', x=np.linspace(0,days,days*10), y=results_avg.T[2], line={'dash':'solid', 'color':'red'}),
go.Scatter(name='R_avg', x=np.linspace(0,days,days*10), y=results_avg.T[3], line={'dash':'solid', 'color':'green'}),
])
fig.show()
days=20
p1=100000
num_E1 = 100000
N = 100000
init = 0, 0, N, 0
beta1 = 1.75*np.ones(days)
sigma = 0.2 # 1/5 --> 5 days on average to go from E --> I
beta = 1.75
gam = 1/11 # 1/11 --> 11 days on average to go from I --> R
parms = sigma, beta, gam
results_11 = seir_model(init, parms, days)
results_22 = SEIR_modelE(beta1, p1, num_E1, days, 1, 1)
fig = go.Figure(data=[
go.Scatter(name='I_stoch', x=np.arange(len(results_22[2])), y=(results_22[2])/p1, line={'dash':'dash', 'color':'yellow'}),
go.Scatter(name='I_det', x=np.linspace(0,days,days*10), y=results_11.T[2]/N, line={'dash':'solid', 'color':'yellow'})
])
fig.show()